Relation extraction (RE), which has relied on structurally annotated corpora for model training, has been particularly challenging in low-resource scenarios and domains. Recent literature has tackled low-resource RE with self-supervised learning, where the solution involves pretraining the relation embedding with an RE-based objective and finetuning on labeled data with a classification-based objective. However, a critical challenge to this approach is the gap between the two objectives, which prevents the RE model from fully utilizing the knowledge in the pretrained representations. In this paper, we aim to bridge this gap and propose to pretrain and finetune the RE model using consistent contrastive learning objectives. Since in this representation learning paradigm one relation may easily form multiple clusters in the representation space, we further propose a multi-center contrastive loss that allows one relation to form multiple clusters, better aligning finetuning with pretraining. Experiments on two document-level RE datasets, BioRED and Re-DocRED, demonstrate the effectiveness of our method. In particular, when using 1% of the end-task training data, our method outperforms a PLM-based RE classifier by 10.5% and 5.8% on the two datasets, respectively.
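The multi-center idea can be illustrated with a short sketch. The details below (cosine similarity, a temperature, and a hard max over each relation's centers) are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def multi_center_contrastive_loss(embeddings, labels, centers, temperature=0.1):
    """Contrastive classification loss where each relation owns several
    learnable centers, so one relation may occupy multiple clusters.

    embeddings: (batch, dim) relation representations
    labels:     (batch,) gold relation ids
    centers:    (num_relations, num_centers, dim) class centers
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    emb = normalize(embeddings)                          # (B, D)
    ctr = normalize(centers)                             # (R, K, D)
    # Cosine similarity of every instance to every center: (B, R, K).
    sims = np.einsum("bd,rkd->brk", emb, ctr) / temperature
    # A relation's score is the similarity to its *nearest* center,
    # which lets distinct clusters of one relation coexist.
    logits = sims.max(axis=-1)                           # (B, R)
    # Standard softmax cross-entropy over relations.
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

A soft max (logsumexp) over centers could be substituted for the hard max to keep gradients flowing to all centers of the gold relation.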
Relation extraction (RE) has been extended to cross-document scenarios because many relations are not simply described in a single document. This inevitably brings the challenge of efficient open-space evidence retrieval to support the inference of cross-document relations, along with the challenge of multi-hop reasoning over entities and evidence scattered across an open set of documents. To combat these challenges, we propose Mr.CoD, a multi-hop evidence retrieval method based on evidence path mining and ranking with adapted dense retrievers. We explore multiple variants of retrievers to show that evidence retrieval is an essential part of cross-document RE. Experiments on CodRED show that evidence retrieval with Mr.CoD effectively acquires cross-document evidence that essentially supports open-setting cross-document RE. Additionally, we show that Mr.CoD facilitates evidence retrieval and boosts end-to-end RE performance with effective multi-hop reasoning in both closed and open settings of RE.
Two key obstacles in biomedical relation extraction (RE) are the scarcity of annotations and the prevalence of instances without explicitly pre-defined labels due to low annotation coverage. Existing approaches, which treat biomedical RE as a multi-class classification task, often generalize poorly in low-resource settings and cannot make selective predictions on unknown cases; instead, they guess from seen relations, hindering their applicability. We present NBR, which casts biomedical RE as a natural language inference (NLI) formulation through indirect supervision. By converting relations to natural language hypotheses, NBR is capable of exploiting semantic cues to alleviate annotation scarcity. By incorporating a ranking-based loss that implicitly calibrates abstinent instances, NBR learns a clearer decision boundary and is instructed to abstain on uncertain instances. Extensive experiments on three widely used biomedical RE benchmarks, namely ChemProt, DDI, and GAD, verify the effectiveness of NBR in both full-set and low-resource regimes. Our analysis demonstrates that indirect supervision benefits biomedical RE even when a domain gap exists, and that combining NLI knowledge with biomedical knowledge leads to the best performance gains.
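The NLI reformulation with abstention can be sketched as follows; the hypothesis template, the scoring interface, and the threshold rule are hypothetical stand-ins for NBR's actual components:

```python
def verbalize(head, tail, template):
    """Turn a candidate relation into a natural language hypothesis."""
    return template.format(a=head, b=tail)

def predict_with_abstention(scores, threshold=0.5):
    """Pick the top-ranked relation, or abstain when even the best
    entailment score falls below the threshold.

    scores: dict mapping relation name -> NLI entailment score of its
            verbalized hypothesis against the input sentence (premise).
    """
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "abstain"
```

For a sentence mentioning a drug and a protein, one would score hypotheses such as `verbalize("aspirin", "COX-1", "{a} inhibits the activity of {b}.")` with an NLI model and rank the candidate relations by those scores.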
Data augmentation (DA) is frequently used to automatically provide additional training data without extra human annotation. However, data augmentation may introduce noisy data that impairs training. To guarantee the quality of augmented data, existing methods either assume no noise exists in the augmented data and adopt consistency training, or use simple heuristics such as training loss and diversity constraints to filter out "noisy" data. However, those filtered examples may still contain useful information, and dropping them completely causes a loss of supervision signals. In this paper, based on the assumption that the original dataset is cleaner than the augmented data, we propose an on-the-fly denoising technique for data augmentation that learns from soft augmented labels provided by an organic teacher model trained on the cleaner original data. A simple self-regularization module is applied to force the model prediction to be consistent across two distinct dropouts, further preventing overfitting on noisy labels. Our method is applicable to general augmentation techniques and consistently improves performance on both text classification and question-answering tasks.
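The self-regularization module can be sketched as a consistency term between two stochastic forward passes. The tiny softmax model and the symmetric KL below are illustrative choices, not the paper's architecture, and the teacher's soft labels are omitted:

```python
import numpy as np

def dropout_forward(x, w, rng, p=0.1):
    """One stochastic pass: inverted dropout on the input, then softmax."""
    mask = (rng.random(x.shape) > p).astype(float)
    h = (x * mask / (1.0 - p)) @ w
    e = np.exp(h - h.max())
    return e / e.sum()

def self_regularization(x, w, seed=0, p=0.1):
    """Symmetric KL between predictions under two independent dropout
    masks; minimizing it forces the two stochastic predictions to agree."""
    rng = np.random.default_rng(seed)
    p1 = dropout_forward(x, w, rng, p)
    p2 = dropout_forward(x, w, rng, p)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * (kl(p1, p2) + kl(p2, p1))
```

In training, this term would be added to the soft-label distillation loss from the teacher model, so the student neither memorizes noisy augmented labels nor drifts between stochastic passes.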
Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters. To make this reasoning process more explicit, recent works retrieve a rationalizing LM's internal knowledge by training or prompting it to generate free-text rationales, which can be used to guide task predictions made by either the same LM or a separate reasoning LM. However, rationalizing LMs require expensive rationale annotation and/or computation, without any assurance that their generated rationales improve LM task performance or faithfully reflect LM decision-making. In this paper, we propose PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns to faithfully reason over rationales via counterfactual regularization. First, PINTO maps out a suitable reasoning process for the task input by prompting a frozen rationalizing LM to generate a free-text rationale. Second, PINTO's reasoning LM is fine-tuned to solve the task using the generated rationale as context, while regularized to output less confident predictions when the rationale is perturbed. Across four datasets, we show that PINTO significantly improves the generalization ability of the reasoning LM, yielding higher performance on both in-distribution and out-of-distribution test sets. Also, we find that PINTO's rationales are more faithful to its task predictions than those generated by competitive baselines.
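The counterfactual regularization can be sketched as a two-term objective. Pushing the perturbed-rationale prediction toward the uniform distribution is one concrete reading of "less confident", and is an assumption of this sketch rather than PINTO's exact loss:

```python
import numpy as np

def cross_entropy(p, y):
    """Negative log-likelihood of the gold label y under prediction p."""
    return float(-np.log(p[y]))

def kl_to_uniform(p):
    """KL(p || uniform); zero iff the prediction is maximally unconfident."""
    u = np.full_like(p, 1.0 / len(p))
    return float(np.sum(p * np.log(p / u)))

def counterfactual_loss(p_with_rationale, p_with_perturbed, y, lam=0.5):
    """Task loss on the real rationale, plus a term that penalizes
    confident predictions when the rationale has been perturbed."""
    return cross_entropy(p_with_rationale, y) + lam * kl_to_uniform(p_with_perturbed)
```

The regularizer ties the reasoning LM's confidence to the rationale's content: if swapping in a corrupted rationale leaves the prediction just as confident, the model is penalized for ignoring the rationale.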
In recent years, vision-language models (VLMs) have excelled on visual reasoning tasks (e.g., attributes, location). While these tasks measure the requisite knowledge to ground and reason over a given visual instance, they do not measure the ability of VLMs to retain and generalize such knowledge. In this work, we evaluate their ability to acquire "visible" physical knowledge: information that is easily accessible from images of static scenes, particularly along the dimensions of object color, size, and space. We build an automatic pipeline to derive a comprehensive knowledge resource for calibrating and probing these models. Our results indicate a severe gap between model and human performance across all three tasks. Furthermore, our caption-pretrained baseline (CapBERT) significantly outperforms VLMs on both the size and spatial tasks, highlighting that despite sufficient access to language grounded in the visual modality, they struggle to retain such knowledge. The dataset and code are available at https://github.com/axe--/viphy.
Recently, there has been growing academic interest in time-varying knowledge graphs, or temporal knowledge graphs (TKGs). Previous studies suggest diverse approaches to TKG reasoning that use historical information. However, less attention has been paid to the hierarchies within such information at different timestamps. Given that a TKG is a sequence of knowledge graphs ordered by time, the chronology of the sequence induces a hierarchy among the graphs. Furthermore, each knowledge graph has its own hierarchical level, which may differ from the others. To address these hierarchical characteristics of TKGs, we propose HyperVC, which utilizes hyperbolic space, which encodes hierarchies better than Euclidean space. The chronological hierarchy among knowledge graphs at different timestamps is represented by embedding the knowledge graphs as vectors in a common hyperbolic space. Moreover, the varying hierarchical levels of the knowledge graphs are represented by adjusting the curvatures of the hyperbolic embeddings of their entities and relations. Experiments on four benchmark datasets show substantial improvements, especially on the datasets with higher hierarchical levels.
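The role of curvature can be made concrete with the standard Poincaré-ball distance. This is textbook hyperbolic geometry rather than HyperVC's full model; the intuition is that a larger curvature parameter c leaves more room near the boundary to encode deep hierarchies:

```python
import numpy as np

def poincare_distance(u, v, c=1.0):
    """Geodesic distance in the Poincare ball of curvature -c.

    Points must satisfy c * ||x||^2 < 1. Adjusting c per knowledge graph
    changes how strongly the space is bent, and hence how much
    hierarchical structure its embeddings can express.
    """
    sqrt_c = np.sqrt(c)
    diff2 = np.sum((u - v) ** 2)
    denom = (1.0 - c * np.sum(u ** 2)) * (1.0 - c * np.sum(v ** 2))
    arg = 1.0 + 2.0 * c * diff2 / denom
    return np.arccosh(arg) / sqrt_c
```

Near the origin this distance approaches the Euclidean one, while distances between points near the boundary blow up, which is what makes tree-like (hierarchical) data embeddable with low distortion.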
Reasoning with preconditions, such as "glass can be used for drinking water", remains an open problem for language models. The main challenges lie in the scarcity of precondition data and the models' lack of support for such reasoning. We present PInKS, Preconditioned Commonsense Inference with WeaK Supervision, an improved model for reasoning with preconditions through minimal supervision. We show, both empirically and theoretically, that PInKS improves results on benchmarks focused on reasoning with the preconditions of commonsense knowledge (by up to 40% Macro-F1 score). We further investigate PInKS through PAC-Bayesian informativeness analysis, precision measures, and an ablation study.
Deep neural networks are often overparameterized and may not easily achieve model generalization. Adversarial training has shown effectiveness in improving generalization by regularizing the change of loss on top of adversarially chosen perturbations. The recently proposed sharpness-aware minimization (SAM) algorithm adopts adversarial weight perturbation, encouraging the model to converge to a flat minimum. Unfortunately, due to increased computational cost, adversarial weight perturbation can only be efficiently approximated per batch instead of per instance, leading to degraded performance. In this paper, we propose that dynamically reweighted perturbation within each batch, where unflattened instances are weighted higher, can serve as a better approximation of per-instance perturbation. We propose sharpness-aware minimization with dynamic reweighting (δ-SAM), which realizes this idea with efficient guardedness estimation. Experiments on the GLUE benchmark demonstrate the effectiveness of δ-SAM.
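A minimal sketch of the batchwise SAM update with instance reweighting. The gradient interface, learning rate, and externally supplied weights are assumptions of this sketch; how δ-SAM actually estimates the per-instance weights (its guardedness estimation) is omitted:

```python
import numpy as np

def sam_step(w, x, y, per_instance_grad, rho=0.05, lr=0.1, weights=None):
    """One SAM-style update with a shared, reweighted weight perturbation.

    per_instance_grad(w, x, y) -> (batch, dim) array of gradients.
    With uniform `weights` this is plain per-batch SAM; upweighting
    instances in sharper regions approximates per-instance perturbation.
    """
    g = per_instance_grad(w, x, y)                         # (B, D)
    if weights is None:
        weights = np.full(len(x), 1.0 / len(x))
    g_batch = (weights[:, None] * g).sum(axis=0)           # reweighted ascent direction
    eps = rho * g_batch / (np.linalg.norm(g_batch) + 1e-12)  # shared perturbation
    g_adv = per_instance_grad(w + eps, x, y).mean(axis=0)  # gradient at perturbed weights
    return w - lr * g_adv                                  # descend from the worst case
```

Computing one shared `eps` per batch is what keeps the cost at two forward-backward passes per step, versus one ascent step per example for exact per-instance SAM.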
Humans use natural language to compose common concepts from their environment into plausible, day-to-day scene descriptions. However, such generative commonsense reasoning (GCSR) skills are lacking in state-of-the-art text generation methods. Descriptive sentences about arbitrary concepts generated by neural text generation models (e.g., pretrained text-to-text Transformers) are often grammatically fluent but may not correspond to human common sense, largely due to their lack of mechanisms to capture concept relations, identify implicit concepts, and perform generalizable reasoning over unseen concept compositions. In this paper, we propose an Imagine-and-Verbalize (I&V) method that learns to imagine a relational scene knowledge graph (SKG) capturing the relations between the input concepts, and leverages the SKG as a constraint when generating a plausible scene description. We collect and harmonize a set of knowledge resources from different domains and modalities, providing a rich auxiliary supervision signal for I&V. The experiments demonstrate the effectiveness of I&V in improving language models on both concept-to-sentence and concept-to-story generation tasks, while enabling the models to learn well from fewer task examples and to generate SKGs that make sense to human annotators.